Strong Truthfulness in Multi-Task Peer Prediction
Authors
Abstract
The problem of peer prediction is to elicit information from agents in settings without any objective ground truth against which to score reports. Peer prediction mechanisms seek to exploit correlations between signals to align incentives with truthful reports. A long-standing concern has been the possibility of uninformative equilibria. For binary signals, the Dasgupta-Ghosh output agreement (OA) mechanism [2] leverages reports across multiple tasks to achieve strong truthfulness, so that the truthful equilibrium maximizes payoff. In this paper, we first characterize conditions on the signal distribution under which the OA mechanism remains strongly truthful with non-binary signals. Our analysis also yields a greatly simplified proof of the binary-signal strong-truthfulness result of Dasgupta and Ghosh. We then introduce the 01 mechanism, which extends the OA mechanism to multiple signals, with a slightly weaker incentive property: no strategy provides more payoff in equilibrium than truthful reporting, and truthful reporting is strictly better than any uninformed strategy (where an agent avoids the effort of even obtaining a signal). In an analysis of peer-grading data from a large MOOC platform, we investigate how well student reports fit our model and conclude that the 01 mechanism would be appropriate for use in this domain.
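To make the incentive structure concrete, the following is a minimal sketch (in Python, not code from the paper) of the binary-signal multi-task OA payment as it is usually presented: an agent is rewarded for agreeing with a peer on a shared bonus task and penalized by the rate at which the two would agree by chance, estimated from their reports on disjoint penalty tasks. The function and variable names (`oa_payment`, `bonus_report_p`, and so on) are illustrative assumptions, not identifiers from the paper.

```python
def oa_payment(bonus_report_p, bonus_report_q, penalty_reports_p, penalty_reports_q):
    """Sketch of the multi-task output agreement (OA) payment for binary signals.

    bonus_report_p, bonus_report_q: the two agents' reports (0 or 1) on a shared bonus task.
    penalty_reports_p, penalty_reports_q: their reports on disjoint penalty tasks.
    """
    # Reward: did the two agents agree on the shared bonus task?
    agreement = 1.0 if bonus_report_p == bonus_report_q else 0.0

    # Penalty: expected agreement "by chance", estimated from each agent's
    # empirical signal frequencies on the disjoint penalty tasks.
    freq_p = sum(penalty_reports_p) / len(penalty_reports_p)
    freq_q = sum(penalty_reports_q) / len(penalty_reports_q)
    chance_agreement = freq_p * freq_q + (1 - freq_p) * (1 - freq_q)

    return agreement - chance_agreement


# Example: the agents agree on the bonus task, and their penalty-task reports
# are balanced, so the payment is 1 - 0.5 = 0.5.
print(oa_payment(1, 1, [1, 0, 1, 0], [0, 1, 1, 0]))
```

Under this scoring, an uninformed strategy such as always reporting the same signal earns roughly zero in expectation, which is the intuition behind the strong-truthfulness guarantee that the paper characterizes for non-binary signals.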
Similar Papers
Informed Truthfulness in Multi-Task Peer Prediction (Working paper)
The problem of peer prediction is to elicit information from agents in settings without any objective ground truth against which to score reports. Peer prediction mechanisms seek to exploit correlations between signals to align incentives with truthful reports. A long-standing concern has been the possibility of uninformative equilibria. For binary signals, a multi-task output agreement (OA) me...
Partial Truthfulness in Minimal Peer Prediction Mechanisms with Limited Knowledge
We study minimal single-task peer prediction mechanisms that have limited knowledge about agents’ beliefs. Without knowing what agents’ beliefs are or eliciting additional information, it is not possible to design a truthful mechanism in a Bayesian-Nash sense. We go beyond truthfulness and explore equilibrium strategy profiles that are only partially truthful. Using the results from the multi-a...
Informed Truthfulness for Multi-Task Peer Prediction (short paper)
We study the problem of information elicitation without verification (“peer prediction”) [Miller et al. 2005]. This problem arises across a diverse range of systems, in which participants are asked to respond to an information task, and where there is no external input available against which to score reports (or any such external input is costly). Examples include completing surveys about the ...
Surrogate Scoring Rules and a Dominant Truth Serum for Information Elicitation
We study information elicitation without verification (IEWV) and ask the following question: Can we achieve truthfulness in dominant strategy in IEWV? This paper considers two elicitation settings. The first setting is when the mechanism designer has access to a random variable that is a noisy or proxy version of the ground truth, with known biases. The second setting is the standard peer prediction ...
Self – Others Rating Discrepancy of Task and Contextual Performance
This research compared ratings of task performance and contextual performance from three different sources: self, peer, and supervisor. Participants were employees in the service industries in Yogyakarta, Indonesia. A sample of 146 employees and 40 supervisors from these industries provided ratings of task performance and contextual performance. The results indicated th...
Publication date: 2016